
    Search snippets that include rich media

    Information pertinent to a search query can take the form of rich media such as video, audio, or images. However, search snippets and summaries currently comprise mainly text. This disclosure describes techniques that surface rich media snippets within search results. The snippets provided using the described techniques include relevant segments of rich media, e.g., audio or video, condensed to the sections that are pertinent to the search query.
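
    A minimal sketch of how such a snippet might be assembled follows; the transcript segments, the word-overlap relevance measure, and all function names are illustrative assumptions rather than details of the disclosure.

```python
# Minimal sketch: pick the media segments most relevant to a query and return
# them as a rich media snippet. The transcript data and the simple word-overlap
# scoring are illustrative assumptions standing in for a real relevance model.

def relevance(query, text):
    """Crude relevance score: fraction of query words present in the text."""
    q_words = set(query.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words) / max(len(q_words), 1)

def build_media_snippet(query, segments, max_segments=2):
    """segments: list of dicts with 'start', 'end' (seconds) and 'transcript'."""
    relevant = [s for s in segments if relevance(query, s["transcript"]) > 0]
    relevant.sort(key=lambda s: relevance(query, s["transcript"]), reverse=True)
    chosen = sorted(relevant[:max_segments], key=lambda s: s["start"])
    # The snippet references only the pertinent time ranges, not the full media.
    return [{"start": s["start"], "end": s["end"]} for s in chosen]

if __name__ == "__main__":
    video_segments = [
        {"start": 0, "end": 30, "transcript": "introduction and welcome"},
        {"start": 30, "end": 75, "transcript": "how to replace a bicycle chain"},
        {"start": 75, "end": 120, "transcript": "adjusting the rear derailleur"},
    ]
    print(build_media_snippet("replace bicycle chain", video_segments))
```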

    Context-dependent output volume for voice-controlled virtual assistant

    Voice-controlled virtual assistants typically maintain a standard output volume level. In certain circumstances, this leads to a response from the virtual assistant that is either too loud or too quiet. For example, if the last-used context of the virtual assistant was a party with high ambient noise, the virtual assistant continues responding too loudly even after the party has ended and the ambient noise level has dropped. This disclosure describes techniques to automatically adapt the output volume level of a virtual assistant based on current context, user preferences, user feedback, etc. A machine learning model predicts an optimum volume level for the virtual assistant, sets the volume, and adapts it based on user feedback. With permission, user feedback serves as training data for predicting the volume level in different contexts.
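
    A minimal sketch of the prediction-and-feedback loop, assuming a nearest-neighbour lookup over past contexts as a stand-in for the trained model; the feature set and all names are illustrative assumptions.

```python
# Minimal sketch of context-dependent volume selection. The (noise, hour)
# features, the nearest-neighbour "model", and the feedback update stand in
# for the trained machine learning model described above.

class VolumePredictor:
    def __init__(self):
        # ((ambient_noise_db, hour_of_day), chosen_volume), gathered with user permission.
        self.history = [((70.0, 21), 0.9), ((35.0, 9), 0.4), ((30.0, 23), 0.2)]

    def predict(self, ambient_noise_db, hour_of_day):
        """Pick the volume used in the most similar past context."""
        def distance(entry):
            (noise, hour), _ = entry
            return abs(noise - ambient_noise_db) + abs(hour - hour_of_day)
        _, volume = min(self.history, key=distance)
        return volume

    def record_feedback(self, ambient_noise_db, hour_of_day, adjusted_volume):
        """User corrections become training data for future predictions."""
        self.history.append(((ambient_noise_db, hour_of_day), adjusted_volume))

if __name__ == "__main__":
    model = VolumePredictor()
    print(model.predict(ambient_noise_db=32.0, hour_of_day=22))  # quiet late-night context
    model.record_feedback(32.0, 22, 0.15)  # user turned the volume down further
```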

    Customized dynamic application delivery based on user need

    Presently, user devices download and install entire software applications even when the user needs only part of the application’s functionality. Because the package includes functionality that is unlikely to be needed, downloading the application uses more bandwidth than necessary. Further, the application code takes up more storage space on the device and uses more memory at runtime. This disclosure utilizes dynamic assembly to deliver an application package that includes only the functionality that fits the user’s needs, device configuration, and context, as determined by a trained machine learning model.
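
    A minimal sketch of the assembly step, assuming a small module catalogue, a stubbed need predictor, and a fixed inclusion threshold; all of these are illustrative assumptions, not details of the disclosure.

```python
# Minimal sketch of dynamic assembly: only modules that a model predicts the
# user needs (and that fit the device) are packaged. The module catalogue,
# the stubbed predictor, and the 0.5 threshold are illustrative assumptions.

MODULES = {
    "core": {"size_mb": 10, "required": True},
    "video_editing": {"size_mb": 40, "required": False},
    "offline_maps": {"size_mb": 60, "required": False},
    "social_sharing": {"size_mb": 15, "required": False},
}

def predict_need(module, user_context):
    """Stand-in for the trained model; returns probability the module is needed."""
    return user_context.get(module, 0.0)

def assemble_package(user_context, free_space_mb, threshold=0.5):
    package = []
    for name, info in MODULES.items():
        if info["required"] or predict_need(name, user_context) >= threshold:
            # Respect the device configuration (here: available storage).
            if sum(MODULES[m]["size_mb"] for m in package) + info["size_mb"] <= free_space_mb:
                package.append(name)
    return package

if __name__ == "__main__":
    context = {"video_editing": 0.1, "offline_maps": 0.8, "social_sharing": 0.6}
    print(assemble_package(context, free_space_mb=100))
```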

    On-device assistant to provide access to device functionality without physical access

    This disclosure describes techniques to access functionality available only on a mobile device, such as a smartphone or wearable device, via alternate channels when the mobile device is not in close proximity to the user. The on-device assistant is enabled to access mobile-specific functionality when the user and their mobile device are in different locations. The physical locations of the mobile device and the user are compared when the user logs into their work terminal or accesses a work device, indicating that the user is no longer in the vicinity of the mobile device. For example, the on-device assistant forwards incoming phone calls to a work phone and forwards push notifications to a work device through a chat application or web interface.
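
    A minimal sketch of the forwarding decision, assuming a simple coordinate comparison with a distance threshold; the threshold, the data shapes, and the routing strings are illustrative assumptions.

```python
# Minimal sketch: when the user's work terminal and their phone report
# different locations, route calls and notifications to the work device.
# The 0.5 km threshold and the example coordinates are assumptions.

import math

def distance_km(a, b):
    """Approximate great-circle distance between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def should_forward(phone_location, user_location, threshold_km=0.5):
    """True when the user is logged in somewhere away from their phone."""
    return distance_km(phone_location, user_location) > threshold_km

def route_event(event, forward):
    if not forward:
        return f"deliver '{event}' on phone"
    # Forward calls to the work phone, push notifications to a chat/web client.
    return f"forward '{event}' to work device"

if __name__ == "__main__":
    phone = (37.4220, -122.0841)      # phone left at home (assumed coordinates)
    terminal = (37.3861, -122.0839)   # user logged into a work terminal
    forward = should_forward(phone, terminal)
    print(route_event("incoming call", forward))
    print(route_event("push notification", forward))
```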

    Dynamic generation of a custom application by combining parts of different applications

    Presently, users must use multiple applications from the same service provider, rather than a single application, to access the distinct functionality provided by each app. As a result, users need to download, install, and maintain a large number of applications on their devices to utilize the various services accessible via their account with the service provider. This disclosure utilizes dynamic assembly of application components to deliver an application package that includes the features of interest to the user from multiple applications of multiple service providers, where the providers allow such combination.
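
    A minimal sketch of the combination step, assuming a small feature catalogue with per-provider permission flags; the catalogue contents and names are illustrative assumptions.

```python
# Minimal sketch of combining features from several providers' apps into one
# package, honouring whether each provider permits such combination. The
# catalogue and the requested feature set are illustrative assumptions.

CATALOGUE = {
    "provider_a": {"allows_combination": True,  "features": {"payments", "chat"}},
    "provider_b": {"allows_combination": True,  "features": {"maps", "reviews"}},
    "provider_c": {"allows_combination": False, "features": {"streaming"}},
}

def build_custom_app(requested_features):
    included, skipped = [], []
    for provider, info in CATALOGUE.items():
        for feature in info["features"] & requested_features:
            if info["allows_combination"]:
                included.append((provider, feature))
            else:
                skipped.append((provider, feature))
    return included, skipped

if __name__ == "__main__":
    included, skipped = build_custom_app({"payments", "maps", "streaming"})
    print("bundled:", included)   # features the user asked for and may combine
    print("excluded:", skipped)   # provider_c does not allow combination
```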

    Automatic obfuscation of sensitive content on locked devices

    The content of notifications from the device operating system and applications is displayed on the screen of a device even when the device is locked. Such notifications make private or sensitive information accessible to other parties without unlocking the device. To counter this risk, device operating systems allow users to hide notification details when the device is locked. However, such hiding results in a significant loss of understandability and utility of notifications. Per techniques of this disclosure, a trained machine learning model is applied, with specific user permission, to identify sensitive content by analyzing the pixels of the device lock screen image that includes notifications. The techniques may be incorporated within the device operating system or web application frameworks. If the inferred level of sensitivity is higher than a threshold value, the corresponding content is obfuscated when displaying the notification on a locked device.
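
    A minimal sketch of the threshold check, assuming a keyword score over notification text as a stand-in for the pixel-level model described above; the keyword list and threshold value are illustrative assumptions.

```python
# Minimal sketch of threshold-based obfuscation. The disclosure analyses lock
# screen pixels with a trained model; here a keyword score over notification
# text stands in for that model, and the keywords/threshold are assumptions.

SENSITIVE_TERMS = {"code", "password", "otp", "balance", "salary"}

def sensitivity_score(text):
    """Stand-in classifier: fraction of words that look sensitive."""
    words = text.lower().split()
    return sum(w.strip(":.,!?") in SENSITIVE_TERMS for w in words) / max(len(words), 1)

def render_locked_notification(text, threshold=0.15):
    if sensitivity_score(text) > threshold:
        return "Content hidden (unlock to view)"  # obfuscated on the lock screen
    return text                                   # low sensitivity: show as-is

if __name__ == "__main__":
    print(render_locked_notification("Your OTP code: 4821"))
    print(render_locked_notification("Weather today: sunny, 22°C"))
```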

    Voice interface for interaction with notifications

    This disclosure describes techniques for a user to interact with notifications using voice commands, e.g., when the user is unable to provide touch inputs to a device, such as when driving, cooking, or engaged in other activities. At an initial stage, templatized voice commands are used, and additional commands are learned using machine learning techniques. Actions such as scrolling through notifications, filtering them by criteria, dismissing them, and acting on them are made available via voice commands. Notifications are displayed or read out based on whether the user is in a context where they can view a display. A tutorial mode is provided that enables the user to provide guidance on interpreting new types of notification formats.
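
    A minimal sketch of templatized command handling, assuming regular-expression templates and an in-memory notification list; both are illustrative assumptions standing in for the learned command interpretation.

```python
# Minimal sketch of templatized voice commands mapped to notification actions.
# The command templates, regex matching, and notification store are assumptions.

import re

NOTIFICATIONS = [
    {"app": "email", "text": "Meeting moved to 3pm"},
    {"app": "chat",  "text": "Dinner tonight?"},
    {"app": "email", "text": "Invoice attached"},
]

COMMANDS = [
    (r"dismiss all (?P<app>\w+) notifications", "dismiss_by_app"),
    (r"read my notifications",                  "read_all"),
]

def handle(utterance):
    for pattern, action in COMMANDS:
        match = re.fullmatch(pattern, utterance.lower().strip())
        if match:
            if action == "dismiss_by_app":
                app = match.group("app")
                remaining = [n for n in NOTIFICATIONS if n["app"] != app]
                return f"dismissed {len(NOTIFICATIONS) - len(remaining)} {app} notifications"
            if action == "read_all":
                return "; ".join(n["text"] for n in NOTIFICATIONS)
    return "Sorry, I did not understand that command."

if __name__ == "__main__":
    print(handle("Read my notifications"))
    print(handle("Dismiss all email notifications"))
```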

    Selective lifelogging to record missed events of interest

    A user can sometimes miss content or events of interest due to distraction or inattention. Rewinding and replaying to view the missed content or event may not be possible when the missed situation is live and not pre-recorded. Further, the user may be experiencing the content in the real world rather than via a computing device. This disclosure describes techniques that make it possible to replay moments from the immediate past, thus enabling a real-world experience similar to the rewind functionality available for recorded content. With the user’s permission, a recording is triggered when it is determined that the user is likely to miss an upcoming event of interest, enabling the user to retrieve and play the missed event. A trained event interpretation model is used to determine the event of interest from the recorded data.
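
    A minimal sketch of the trigger, assuming a rolling buffer of recent frames and a scalar event-of-interest score from the interpretation model; the buffer length, score threshold, and frame format are illustrative assumptions.

```python
# Minimal sketch of selective recording: keep a short rolling buffer and
# persist it only when an event-of-interest trigger fires while the user is
# distracted. The trigger logic and the frame format are assumptions.

from collections import deque

class SelectiveRecorder:
    def __init__(self, buffer_seconds=30):
        self.buffer = deque(maxlen=buffer_seconds)  # one frame per second, illustrative
        self.saved_clips = []

    def on_frame(self, frame, event_of_interest_score, user_attentive):
        self.buffer.append(frame)
        # Record only with user permission, when an interesting event is likely
        # to be missed because the user is not paying attention.
        if event_of_interest_score > 0.8 and not user_attentive:
            self.saved_clips.append(list(self.buffer))

    def replay_last(self):
        return self.saved_clips[-1] if self.saved_clips else None

if __name__ == "__main__":
    rec = SelectiveRecorder(buffer_seconds=5)
    for t in range(10):
        rec.on_frame(f"frame-{t}",
                     event_of_interest_score=0.9 if t == 7 else 0.1,
                     user_attentive=(t != 7))
    print(rec.replay_last())  # buffered moments surrounding the missed event
```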

    Transferable deep links

    This disclosure describes techniques to automatically generate shareable deep links from applications and/or web pages. A system-level application programming interface (API) provides automatically transferable deep links from applications or web pages. Per the techniques, applications or web pages store a given state along with an associated hash. For example, when a user shares a link to a shopping cart, the associated hash enables the recipient to see the same content as the sender. While the applications or web pages control shareable status, the shareable deep link techniques are provided via the browser or operating system and are made available system-wide.
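
    A minimal sketch of hash-backed state sharing, assuming an in-memory state store and an illustrative link format; the store, the URL, and the shopping-cart example data are assumptions, not part of the described system-level API.

```python
# Minimal sketch of shareable deep links backed by a stored state hash.
# The in-memory store and link format stand in for the system-level API.

import hashlib, json

STATE_STORE = {}  # hash -> serialized application/web page state

def create_deep_link(app_id, state):
    payload = json.dumps(state, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:16]
    STATE_STORE[digest] = payload            # app stores its state with the hash
    return f"https://example.test/{app_id}/deeplink/{digest}"

def resolve_deep_link(link):
    digest = link.rsplit("/", 1)[-1]
    return json.loads(STATE_STORE[digest])   # recipient restores the sender's state

if __name__ == "__main__":
    cart = {"items": [{"sku": "A100", "qty": 2}], "currency": "USD"}
    link = create_deep_link("shopping_app", cart)
    print(link)
    print(resolve_deep_link(link))           # same cart contents as the sender saw
```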

    Sharing photos and associated text messages within messaging apps in the intended order

    The order in which a photo and messages are delivered and displayed in a chat conversation is affected by processing and/or network delays as well as by the specific implementations of messaging applications. Receiving a photo out of sequence with the text messages related to it can create confusion and/or misunderstanding and can ruin the intended effect of the sender’s communication. This disclosure describes the use of machine learning techniques to identify text messages that are associated with a shared photo and to bundle those together as a joint set. Descriptive text for the photo is shown as a placeholder until the photo is fully transferred and displayed. Alternatively, or in addition, messages related to the photo that are sent after the photo is shared are shown to the recipient only after the recipient views the photo.
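
    A minimal sketch of the bundling and delayed-delivery logic, assuming a keyword-overlap check as a stand-in for the machine learning classifier; all names and the placeholder format are illustrative assumptions.

```python
# Minimal sketch of bundling a photo with its related text messages so they
# arrive as a unit. The keyword-overlap relatedness check stands in for the
# machine learning classifier described above.

def related_to_photo(message, photo_caption):
    """Stand-in classifier: shares at least one non-trivial word with the caption."""
    caption_words = {w for w in photo_caption.lower().split() if len(w) > 3}
    return any(w.strip("!?.,") in caption_words for w in message.lower().split())

def bundle_outgoing(photo_caption, messages):
    bundle, independent = [photo_caption], []
    for msg in messages:
        (bundle if related_to_photo(msg, photo_caption) else independent).append(msg)
    return bundle, independent

def deliver(bundle, independent, photo_transferred):
    # Related messages are held back until the photo itself has been delivered;
    # until then, the caption acts as a placeholder for the recipient.
    if not photo_transferred:
        return [f"[photo incoming] {bundle[0]}"] + independent
    return bundle + independent

if __name__ == "__main__":
    bundle, other = bundle_outgoing("sunset at the beach",
                                    ["look at that beach!", "what time is the meeting?"])
    print(deliver(bundle, other, photo_transferred=False))
    print(deliver(bundle, other, photo_transferred=True))
```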